Profiles Research Networking Software (RNS) is a free, open-source semantic web application that uses the VIVO ontology to generate searchable online profiles of an organization's investigators (http://profiles.catalyst.harvard.edu). This poster describes a new feature called Group Profiles, which creates separate pages in the Profiles RNS website for centers, laboratories, projects, or other groups of people. This allows a group to share information about itself on the Profiles RNS website and link to the profile pages of its members. Group Profiles was developed as a collaboration between the Harvard Profiles RNS team and Christopher Shanahan and Chris Dorney from the Boston University Clinical Translational Science Institute (CTSI). Phase 1 of the Group Profiles project includes the following functionality: 1) Membership. Managers of Group Profile pages can affiliate individuals within the Profiles RNS website as members of the group. The list of members is shown on the Group Profile page, and the groups a person is affiliated with are shown on his or her profile page in a My Groups module. 2) Custom Page Layout. Within the Profiles RNS framework, Group Profiles are a distinct object class, which means they can have different content modules and a different page layout than person profile pages. 3) Custom Modules. Certain Group Profile modules enable authorized group managers to post content directly to the Group Profile pages. Other modules pull content from member profile pages to display on the Group Profile page. The current list of modules includes: (a) Photo (group logo, picture, etc.); (b) Welcome, About Us, and Contact text; (c) Open Social Gadgets (e.g., links to other groups or external websites); and (d) Publications. 4) Security. Group Profiles utilize the Profiles RNS security model, which enables group managers to control whether content is visible to the public or restricted to certain users. 5) Search. Group Profile page content is searchable through the website and the Profiles RNS APIs. 6) Open Data. Group Profile page content is integrated into the Profiles RNS semantic web architecture so that it can be exchanged with other applications (like VIVO) via linked open data. We plan to expand the functionality of Group Profiles in the future and hope that other institutions will also contribute new modules.
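To give a feel for what the linked open data access in item 6 could look like from a consumer's perspective, here is a minimal Python sketch that fetches the RDF behind a profile page and lists the statements made about it. The group URI is a made-up placeholder, and the assumption that the page serves an RDF serialization via content negotiation is illustrative rather than a statement about the Profiles RNS API.

import rdflib

# Hypothetical URI of a Group Profile that also serves RDF as linked open data.
GROUP_URI = "https://profiles.example.edu/profile/12345"

g = rdflib.Graph()
g.parse(GROUP_URI)  # rdflib negotiates for an RDF serialization of the page

# Print every statement asserted about the group, e.g. its label, members, and publications.
for predicate, obj in g.predicate_objects(rdflib.URIRef(GROUP_URI)):
    print(predicate, obj)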
Griffin Weber, Associate Professor of Medicine and Biomedical Informatics, Harvard Medical School
Griffin Weber, M.D., Ph.D., is an Associate Professor of Medicine and Biomedical Informatics at Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School (HMS). He invented an open source expertise discovery and networking tool for scientists called Profiles Research Networking Software (http://profiles.catalyst.harvard.edu). It automatically mines large datasets such as PubMed and NIH ExPORTER to find information about investigators' research areas and identify ways in which they are connected. It presents these connections using temporal, geospatial, and network visualizations. The software has numerous applications, ranging from finding individual collaborators and mentors to understanding the dynamics of an entire research community. Profiles RNS is now used at dozens of institutions worldwide, including universities, pharmaceutical companies, Federal agencies, and physician networks. Dr. Weber received his M.D. and Ph.D. in computer science from Harvard University in 2007. While still a student, he became the first Chief Technology Officer of Harvard Medical School and built an educational web portal that provides interactive online content to over 500 courses. His past research projects also include analyzing DNA microarrays, modeling the growth of breast cancer tumors, and creating algorithms for predicting life expectancy. He also helped develop another widely adopted open source platform called Informatics for Integrating Biology and the Bedside (i2b2), which enables investigators to query, analyze, and visualize large clinical repositories for hypothesis testing and identification of patients for clinical trials.
Warren Kibbe, Chief Data Officer, Duke Cancer Institute, Duke University
Warren A. Kibbe, PhD, is Chief for Translational Biomedical Informatics in the Department of Biostatistics and Bioinformatics and Chief Data Officer for the Duke Cancer Institute. He joined the Duke University School of Medicine in August after serving as acting deputy director of the National Cancer Institute (NCI) and director of the NCI’s Center for Biomedical Informatics and Information Technology, where he oversaw 60 federal employees and more than 600 contractors. As acting deputy director, Dr. Kibbe was involved in the myriad activities that NCI oversees as a research organization, as a convening body for cancer research, and as a major funder of cancer research, awarding nearly $4B annually throughout the United States. A recognized national leader in biomedical informatics and data sharing, Dr. Kibbe has been instrumental in efforts to speed scientific discovery and facilitate translational research by using IT, informatics, and data science to address complex research challenges. At the NCI, his responsibilities included defining and implementing the NCI Data Commons and a vision for a National Cancer Research Data Ecosystem. He was instrumental in establishing the NCI’s partnership with the U.S. Department of Energy (DOE) to jointly develop the next generation of high-performance computing architectures to address important questions in cancer biology, treatment, the development of resistance in patients, and understanding outcomes. Most recently, Dr. Kibbe played a pivotal role in coordinating the data-sharing scientific components of the Cancer Moonshot initiative. Prior to joining the NCI, Dr. Kibbe was a professor of Health and Biomedical Informatics in the Feinberg School of Medicine and the director of Cancer Informatics and CIO for the Robert H. Lurie Comprehensive Cancer Center at Northwestern University. InformationWeek named Dr. Kibbe a Top 25 Innovative Healthcare CIO for 2012. In 2017, he was recognized by FedHealthIT as one of the ‘Top 100 Executives and Leaders’ in federal healthcare IT. Dr. Kibbe received his PhD from Caltech and completed his postdoctoral fellowship at the Max Planck Institute in Göttingen, Germany.
Jan Fransen, Service Lead for Research Information Management and Discovery Systems, University of Minnesota
Jan Fransen is the Service Lead for Research Information Management and Discovery Systems for University of Minnesota Libraries in the Twin Cities. In that role, Jan works across Libraries divisions and with campus partners to provide library systems that save researchers' and students' time and improve their access to the materials they need to get to their next steps.
Data mining coupled with network analysis has been successfully used as a digital methodology to study research collaborations and knowledge flow associated with drug development. To enable a broad range of quantitative studies based on this approach, we have developed the Enhanced Research Network Information Environment (ERNIE), a scalable cloud-based knowledge platform that integrates free data drawn from public sources as well as licensed data from commercial sources. ERNIE is presented as an accessible template for repositories that can be used to support expert qualitative assessments, while offering burden reduction and access to integrated data. Analytical workflows in ERNIE are partially automated to enable expert input at critical stages. To facilitate adoption, reuse, and extensibility, ERNIE was built with open source tools, and its modular design enables the facile addition, deletion, or substitution of data sources. ERNIE is cloud-based and consists of a PostgreSQL database in the CentOS 7.4 Linux environment, with additional virtual machines providing access to Solr 7.1, Spark, and Neo4j. The volume of data in ERNIE is roughly 4 TB and is refreshed weekly through automated ETL processes. Interactive users have access to Python, SQL, R, Java, Jupyter notebooks, and standard Linux utilities. To demonstrate the capabilities of ERNIE, we report the results of seven case studies that span opioid addiction, pharmacogenomics, target discovery, behavioral interventions, and drug development. In these studies, we mine and analyze data from policy documents, regulatory approvals, research grants, bibliographic and patent databases, and clinical trials to document collaborations and identify influential research accomplishments.
Samet Keserci, NET ESolutions Corporation
Avon Davey, NET ESolutions Corporation
Alexander Pico, Gladstone Institutes
Dmitriy Korobskiy, NET ESolutions Corporation
George Chacko, NET ESolutions Corporation
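For readers who want a concrete picture of how an analyst might work against ERNIE's PostgreSQL back end, the following sketch runs a simple citation-count query from Python. The connection parameters and the citation_pairs table are hypothetical placeholders, not the actual ERNIE schema.

import psycopg2

# Placeholder connection details; the real host, database, and credentials differ.
conn = psycopg2.connect(host="ernie.example.org", dbname="ernie",
                        user="analyst", password="secret")

with conn, conn.cursor() as cur:
    # Rank papers by how often they are cited in a hypothetical citing/cited pair table.
    cur.execute("""
        SELECT cited_id, COUNT(*) AS times_cited
        FROM citation_pairs
        GROUP BY cited_id
        ORDER BY times_cited DESC
        LIMIT 10
    """)
    for cited_id, times_cited in cur.fetchall():
        print(cited_id, times_cited)

conn.close()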
At Duke University, we have worked to implement Symplectic Elements in a way that not only bolsters VIVO (Scholars@Duke) but also integrates tightly with a variety of university initiatives and aims to meet the needs of individuals, a number of institutional partners, and the university overall. We'll share how we have fine-tuned our Elements implementation to support these goals. Now in our fifth year with Elements, we have recently completed a new wave of improvements to our implementation. In particular, we have revisited how Elements integrates with VIVO, DSpace, Altmetrics, and ORCID. In this presentation, we'll discuss how these technologies have helped create a research systems feedback loop that has opened the door for new investments. We will demonstrate some of these new features and discuss how they have been received by the Duke community, as well as plans for future enhancements. We will cover everything from optimizing data loads to promoting open access to defining research impact. We'll also discuss our ongoing considerations for user experience and data stewardship.
Damaris Murry, Duke University
Paolo Mangiafico, Duke University
User needs and requirements are one of the fundamental areas where we can engage the VIVO community in order to better understand how the VIVO software can evolve to provide improved services to its users. We propose a panel highlighting examples of obtaining researcher and faculty feedback on VIVO using both informal and formal user-centered methods. We will share what we have done to gather input from users, how this work has informed or could inform our design and development for our individual VIVO instances, and how our different use cases and user stories could help inform future directions for VIVO. We will encourage discussion about what others in the VIVO community or related research information management communities see as potentially important directions for design and development. EarthCollab is a National Science Foundation EarthCube grant-funded project which seeks to use VIVO to model and interlink information about GeoScience projects, data, and contributors. Feedback and evaluations from researchers and faculty on VIVO instances developed as part of this project were obtained through multiple methods, including an in-person one-day workshop, a survey, usability testing, a focus group, and task-oriented user sessions and interviews with scientific stakeholders. This set of feedback helped inform the design of the project’s two VIVO instances and shed light on some of the ontological needs for the project. We will provide a summary of approaches and results of our work to date. The German National Library of Science and Technology (TIB) has used multiple methods to research user needs in research information systems. We conducted a study about user behavior with regard to research information systems using semi-structured interviews with 30 scientists from both universities and non-university research institutions. Additionally, we conducted a survey, which included CRIS related questions, of 1,460 researchers about the information and publication behavior of researchers in science and technology. Furthermore, the TIB hosted the 2nd VIVO workshop with 40 participants from various German-speaking universities and institutions. Part of this workshop was devoted to having participants discuss and prioritize future work in VIVO development. The report from this workshop states that “Improved functionalities for research reporting are heavily desired here.” We will discuss the overall results from this in addition to other feedback and impressions we have received from institutions in Germany who are exploring the use of VIVO. The Scholars@Duke team holds quarterly (previously monthly) user group meetings where they engage with the Scholars and Elements user communities. Additionally, the team holds annual faculty focus group lunches to solicit faculty feedback, conducts usability testing with faculty and power users, engages in multiple meetings with faculty members and researchers one-on-one, and regularly demonstrates the Scholars application to researchers and faculty. The Scholars team also leverages user metrics from Google Analytics and Tableau to augment end user feedback to further inform our design decisions. As part of this panel, we will discuss the multiple approaches used to gather feedback and how this feedback, and user metrics, informed design or spurred development directions.
Lamont Cannon, Duke University
Developing an immersive and identifiable visual brand is an essential component of growing a community of loyal and excited users. Fortunately, understanding the key ingredients of developing your visual brand has never been as straightforward, thanks to an abundance of online resources and tools. In this session, we’ll dive into actionable techniques and strategies to promote, grow, and maintain an engaging community. • Better understand the significance of marketing in content • Recognize design trends that prioritize engagement and better align calls to action • Identify opportunities to streamline messaging and production processes • Examine visual and interactive signals to build trust in the experience and product
Mitchell Melkonian, Duke University
Julia Trimmer, Duke University
The VIVO code base has grown over 15 years to more than half a million lines of enterprise Java code. Experience has shown a steep learning curve for new developers, especially front-end developers, and difficulty in integrating newer web development technologies and approaches. Is there an opportunity to experiment with new technologies and techniques that are easier for new developers to dive into? The VIVO Product Evolution group is leading an effort to turn this opportunity into a reality. The vision is to prototype an agile web/mobile application that showcases the researchers, units, and scholarly works of an institution with their branding. This workshop will be a working session for the VIVO Product Evolution group, but also an occasion to engage with other interested VIVO community members. The workshop will include updates and discussion involving the group leadership and current subgroups, lightning talks exploring new technologies and methods, and breakout sessions for the subgroups. The current subgroups are: * Representing VIVO Data * Functional requirements * Implementing the presentation layer. Technologies, standards, and approaches under evaluation: * JSON and JSON-LD * Solr and Elasticsearch * GraphQL * Schema.org as well as other scholarly information and data models (CERIF, CASRAI, COAR, CD2H) * JavaScript frameworks (React, Angular, Vue, etc.) * Modern Agile principles
Richard Outten, Duke University
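Of the technologies listed above, JSON-LD with Schema.org vocabulary is easy to illustrate briefly. The snippet below assembles a minimal Schema.org Person document of the kind the group is evaluating for the presentation layer; the fields and values are invented for illustration and do not represent an agreed VIVO output format.

import json

# A minimal researcher profile expressed as Schema.org JSON-LD (illustrative values only).
profile = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Scholar",
    "jobTitle": "Associate Professor",
    "affiliation": {"@type": "Organization", "name": "Example University"},
    "identifier": {
        "@type": "PropertyValue",
        "propertyID": "ORCID",
        "value": "https://orcid.org/0000-0002-1825-0097",
    },
}

print(json.dumps(profile, indent=2))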
Collaboration is essential at all stages of the scientific process at Duke. However, at such a large, diverse university, finding collaborators and analyzing past collaborations can be a cumbersome process. This project seeks to ease these challenges. We expand upon current visualizations on the Scholars@Duke website and provide new data visualizations for greater insight into collaboration. Primarily, this includes analysis of scholars based on similarity in the topics of past publications, determined from titles and keywords associated with each publication.
Esko Brummel, Duke University
John Behart, Duke University
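As a rough sketch of the topic-similarity idea described above (not the project's actual code), the example below represents each scholar by the concatenated titles and keywords of their publications, builds TF-IDF vectors, and computes pairwise cosine similarity.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# One document per scholar: concatenated publication titles and keywords (toy data).
scholars = {
    "Scholar A": "network analysis citation graphs research collaboration",
    "Scholar B": "graph analytics co-authorship networks collaboration discovery",
    "Scholar C": "clinical trials patient outcomes oncology treatment",
}

names = list(scholars)
vectors = TfidfVectorizer().fit_transform(scholars.values())
similarity = cosine_similarity(vectors)

# Print the similarity score for each pair of scholars.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: {similarity[i, j]:.2f}")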
In 2016, the Scholars@Cornell project was initiated with the aim of advancing the visibility and accessibility of Cornell scholarship and preserving it for future generations. However, in the data life cycle, preserving data and providing access to the recorded data is not the final stage. Data stored in a database is merely a record and can be of use only if human experience and insight are applied to it, data analysis is performed, and data is transformed into knowledge. Faculty and publication data are capable of revealing much more about the patterns and dynamics of scholarship and the institution. Such data can support universities in their systems for managing faculty information, scholars' websites, faculty reporting, and strategic decisions in general. We explore scholarship data through the lenses of a scholar, an academic unit, and an institution. Unlike systems that provide web pages of researcher profiles using lists and directory-style metaphors, our work explores the power of graph analytics and infographics for navigating a rich semantic graph of scholarly data. We believe that scholarship data, accessible in RDF format through VIVO web pages, is not easy to reuse, specifically by software developers who have limited knowledge of semantic technologies and the VIVO data model. In Scholars@Cornell, the scholarship data is open for reuse in different ways. The data can be accessed via the Data Distribution API in RDF or in JSON format. The infographics, built using D3 JavaScript libraries, can be embedded on different institutional websites. Additionally, new web applications can be developed that use the scholarship knowledge graph, showcasing research areas and expertise. In this presentation, I will present an overview of the project and lessons learned, and will emphasize data reuse and data analysis. I will discuss our journey: how we moved from counting list items to a connected graph, from data list views to data analysis, and from data in peace to data in use.
Cornell University Libraries is developing a closer integration between its VIVO implementation and ORCID. The Scholars-ORCID Profile Update Service (SOPUS) is a pair of locally-developed applications that let faculty members push their publications from Scholars@Cornell to their ORCID profile. The Scholars@Cornell project collects publication data from a variety of sources using Symplectic Elements. This data is collated, curated and merged into “uber-records” using a suite of locally-developed software. The uber-records are transformed into RDF and are published on the Scholars@Cornell website. Faculty members then have the option of pushing their publication data from Scholars to ORCID. The SOPUS web service (SOPUS-WS) allows a faculty member to navigate from their Scholars profile page to their ORCID record, creating a chain of authorization so their publication data can be pushed to ORCID. When additional data is entered into Scholars, a batch process using the SOPUS command line application (SOPUS-CL) will update the ORCID records of all faculty members with active authorizations. SOPUS begins with an upgrade to the existing VIVO-ORCID integration toolkit. This upgraded toolkit is de-coupled from the VIVO application and joined with its own persistence layer to store long-lasting authorization tokens and transaction histories. The result is then wrapped in two activation layers, so it can be used as either an interactive web service or a command-line application. In this session, we will demonstrate the interactive authorization process (SOPUS-WS) and the batch update process (SOPUS-CL). We will discuss the user experience, the software components, and the data elements. We will show how these processes are integrated with VIVO, and discuss how other sites can add these processes to their own VIVO installations.
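The batch update that SOPUS-CL performs ultimately amounts to calls against the ORCID member API with a stored authorization token. The sketch below shows roughly what adding one work could look like; the sandbox endpoint, payload fields, and token handling are simplified assumptions for illustration and are not SOPUS's actual implementation.

import requests

ORCID_ID = "0000-0002-1825-0097"            # example ORCID iD
ACCESS_TOKEN = "stored-long-lasting-token"  # obtained earlier via the authorization flow

# A minimal work payload; real records carry much richer citation metadata.
work = {
    "title": {"title": {"value": "An Example Article"}},
    "type": "journal-article",
    "external-ids": {"external-id": [{
        "external-id-type": "doi",
        "external-id-value": "10.1234/example",
        "external-id-relationship": "self",
    }]},
}

response = requests.post(
    f"https://api.sandbox.orcid.org/v3.0/{ORCID_ID}/work",  # sandbox member API for testing
    json=work,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Accept": "application/json"},
)
response.raise_for_status()
print("Work created at:", response.headers.get("Location"))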
Cornell University is a decentralized institution where every college and school uses its own means and procedures to record its faculty's publications. A few of them rely on institutional repositories such as Digital Commons from bepress, while others use faculty reporting tools such as Activity Insight from Digital Measures or Symplectic Elements from Digital Science. In this presentation, I will discuss a case study of the College of Agriculture and Life Sciences (CALS), which currently uses Activity Insight (AI) for its faculty reporting needs. Every year during the faculty reporting season, faculty report their research contributions of the past year. In CALS, different strategies are used to collect publication data from faculty. Faculty can either i) provide their up-to-date CVs, from which administrative staff in the college manually enter publication data into the reporting system; ii) copy and paste the publication lists from their CVs as a single text blob in a free-text template provided by the CALS administration; or iii) log in to the reporting system themselves and enter their publications in a publication template form. In all three options, publications are entered manually into the faculty reporting system. Such manually entered data is prone to errors, and many examples have been found where manually entered citation data does not reflect the published record. Some of the observed errors include incorrect journal names, incorrect ISSN/EISSN numbers, mistakes in DOIs, and incorrect lists or orders of authors. Such dirty citation data cannot be used for data analysis or future strategic discussions. In the Scholars@Cornell project, we use an uberization module to clean such dirty data. First, we load the dirty publication data from Activity Insight (AI) into Symplectic Elements as an institutional feed. In cases where the loaded publication has already been harvested by Symplectic Elements via upstream sources (such as WoS, PubMed, or Scopus), the AI publication becomes another record in the existing publication object. In scenarios where the AI publication is the first record in Elements, one may re-run the search for the faculty member so that the citation data for the same publication is harvested from upstream sources as well. Once this step is completed, the next step is to extract the publication objects from Elements, merging data from different sources (i.e., one record from each source) and creating a single record, an "uber record," for each article. For the creation of an uber record, we ranked the citation data sources based on the experience and intuition of two senior Cornell librarians and started with the metadata from the source they considered best. The uberization module merges the citation data from the different publication records (including the AI record) in a way that creates a single record that is clean and comprises the best available citation data. After passing data validation, uber records are transformed into an RDF graph and loaded into Scholars@Cornell.
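The merging step can be pictured with a small sketch like the one below, which fills each citation field from the highest-ranked source that supplies it. The ranking and field names are invented for illustration; the actual uberization module and the librarians' source ranking are Cornell's own.

# Illustrative source ranking, best first; the real ranking was set by Cornell librarians.
SOURCE_RANK = ["wos", "scopus", "pubmed", "activity_insight"]

def uberize(records):
    """Merge per-source records for one article into a single 'uber record'.

    `records` maps a source name to that source's citation metadata; each field
    keeps the value from the best-ranked source that provides it.
    """
    uber = {}
    for source in SOURCE_RANK:
        for field, value in records.get(source, {}).items():
            uber.setdefault(field, value)
    return uber

records = {
    "activity_insight": {"title": "An Example Articel", "doi": "10.1234/example"},  # manual entry with a typo
    "pubmed": {"title": "An Example Article", "journal": "Journal of Examples", "issn": "1234-5678"},
}
print(uberize(records))
# {'title': 'An Example Article', 'journal': 'Journal of Examples', 'issn': '1234-5678', 'doi': '10.1234/example'}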
In addition to the APIs of large abstract and indexing sources, there are a number of open APIs available to search, harvest, and download citation metadata; a few of these are the CrossRef, PubMed, and DBLP APIs. I will discuss OpenHarvester, an interactive tool that processes result sets harvested using the CrossRef, DBLP, and PubMed APIs and uses a simple algorithm that refines the result set using a recursive approach. In the future, other APIs, such as the Scopus, Web of Science, and Dimensions APIs, may also be included. This is preliminary work. The prototype works in two separate steps: first, downloading potential publications for a person from a database, and second, processing the result set and claiming the precise publications. Claimed publications can then be saved in RDF (or CSV) and pushed to a VIVO instance. Cornell is interested in collaboration and partnership with people and organizations that share similar interests. In addition, we would like to collaborate with the teams behind existing products, such as ReCiter from Weill Cornell and the Harvard APIs for finding people and publications.
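As a small illustration of the first step, downloading candidate publications for a person, the sketch below queries the public CrossRef works API by author name; the recursive refinement and claiming logic described above is not shown.

import requests

# Ask the public CrossRef works API for candidate publications matching an author name.
response = requests.get(
    "https://api.crossref.org/works",
    params={"query.author": "Jane Scholar", "rows": 20},
    timeout=30,
)
response.raise_for_status()

for item in response.json()["message"]["items"]:
    title = (item.get("title") or ["(untitled)"])[0]
    print(item.get("DOI"), "-", title)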
Muhammad Javed, Cornell University
Much of the VIVO code base is understood only by a few original core developers. For VIVO to grow and thrive, we need new developers with new energy and new ideas. But where will they come from? The goal of this workshop is to provide developers with a practical, hands-on introduction to selected VIVO development approaches and debugging methods. We also hope to highlight the process for contributing back to the VIVO core code. The workshop is intended to encourage further developer engagement in the VIVO community, to help begin training developers, and to build a broader base of VIVO committers. Attendees will be provided with a virtual machine containing VIVO and the Eclipse IDE prior to the workshop and will be led through the workshop by VIVO developers.
Jim Blake, Cornell University
Tim Worrall, Cornell University
Joseph McEnerney, Cornell University
Translation of an ontology into an application's display and editing interactions requires going beyond what the ontology expresses to consider how information modeled ontologically will make sense to end users. For instance, within the VIVO application, the interface can enable different display or editing interactions based on the context within which a given ontology property appears. VIVO and Vitro utilize several methods of application-level configuration to encode these interaction-specific decisions separately from the ontology. Metadata application profiles are software-independent specifications for defining expectations about how metadata or an ontology should be used in certain contexts. These application profiles encode information that parallels some aspects of Vitro or VIVO application-level configurations. In this presentation, we will discuss the process and the results of the work we did to convert metadata application profiles specified in the Shapes Constraint Language (SHACL) into the VitroLib prototype, which extends Vitro. We will discuss how the translations appear in the interface and the opportunities and challenges around using application-level configurations in this manner. We developed the VitroLib prototype tool to explore how to enable library catalogers to create and edit linked data to describe library resources such as books and recorded music. This prototype development was part of the Mellon Foundation-funded Linked Data For Libraries Labs (LD4L-Labs) and Linked Data For Production (LD4P) projects, which explore aspects of transitioning library services to the use of linked open data. LD4L-Labs and LD4P participants, including music catalogers, used SHACL to define metadata application profiles for audio works. The SHACL standard makes it possible to formally specify how ontology properties are expected to behave, and it has uses both for validating data and for designing forms for entering data. In addition to discussing how we converted SHACL into VitroLib, we will also touch on custom form streamlining and how this work intersects with the SHACL translations. Using a software-independent method to specify form and user interface interactions has several benefits: (a) catalogers and metadata experts can specify their interaction expectations for Vitro-based tools such as VitroLib or VIVO without the help of a software expert, (b) communities who develop these interaction expectations can easily share them within and beyond the VIVO community, and (c) easing the translation process to application configuration can support further adoption of these profiles within the VIVO and Vitro communities. As VitroLib relies on Vitro in a way analogous to VIVO, the work we have done to customize and extend Vitro to fit cataloging needs in the bibliographic metadata domain has potentially direct implications or correlations with VIVO architectural needs and potential evolution.
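To make the notion of a SHACL application profile concrete, the sketch below parses a small, made-up shape (not one of the LD4P audio profiles) and prints the property paths, datatypes, and cardinalities it declares; this is the kind of information a tool like VitroLib can translate into form fields.

import rdflib
from rdflib.namespace import RDF

SH = rdflib.Namespace("http://www.w3.org/ns/shacl#")

# A tiny, invented application profile: a work must have exactly one string title.
SHAPE_TTL = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/shapes#> .

ex:WorkShape a sh:NodeShape ;
    sh:property [
        sh:path dct:title ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
"""

g = rdflib.Graph()
g.parse(data=SHAPE_TTL, format="turtle")

# Walk each node shape's property shapes and print what a form generator would need.
for shape in g.subjects(RDF.type, SH.NodeShape):
    for prop in g.objects(shape, SH.property):
        print("path:", g.value(prop, SH.path),
              "datatype:", g.value(prop, SH.datatype),
              "minCount:", g.value(prop, SH.minCount))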
In the context of VIVO, “the value of making scholarly data open, found, and consumed” is predicated on a solid foundation of vibrant and sustainable core software, as well as a community aligned on strategic initiatives. This session will discuss specific ways that the community is evolving in order to be more inclusive, more effective, and more focused as we move towards a revitalized VIVO platform.
Huda Khan, Cornell University
Erica Johns, Cornell University
Translation of an ontology into an application’s display and editing interactions requires going beyond what the ontology expresses to consider how information modeled ontologically will make sense to end-users. For instance, within the VIVO application, the interface has the ability to enable different display or editing interactions based on the context within which a given ontology property appears. VIVO and Vitro utilize several methods of application-level configuration to enable encoding these interaction-specific decisions separately from the ontology. Metadata application profiles are software-independent specifications for defining expectations for how the metadata or an ontology should be used in certain contexts. These application profiles encode information that parallels some of the aspects of Vitro or VIVO application-level configurations. In this presentation, we will discuss the process and the results of the work we did for converting metadata application profiles specified in the Shapes Constraint Language (SHACL) into the VitroLib prototype, which extends Vitro. We will discuss how the translations appear in the interface and the opportunities and challenges around using application level configurations in this manner. We developed the VitroLib prototype tool to explore how to enable library catalogers to create and edit linked data to describe library resources such as books and recorded music. This prototype development was part of the Mellon Foundation-funded Linked Data For Libraries Labs (LD4L-Labs) and Linked Data For Production (LD4P) projects, which explore aspects of transitioning library services to the use of linked open data. LD4L-Labs and LD4P participants, including music catalogers, used SHACL to define metadata application profiles for Audio works. The SHACL standard makes it possible to formally specify how ontology properties are expected to behave, and has uses for both validating data and for designing forms for entering data. In addition to discussing how we converted SHACL into VitroLib, we will also touch on custom form streamlining and how this work intersects with the SHACL translations. Using a software-independent method to specify form and user interface interactions has several benefits: (a) catalogers and metadata experts can specify their interaction expectations for Vitro-based tools such as VitroLib or VIVO without the expertise of a software expert, (b) communities who develop these interaction expectations can easily share them within and beyond the VIVO community, and (c) easing the translation process to application configuration can support further adoption of these profiles within the VIVO and Vitro communities. As VitroLib relies on Vitro in an analogous way to VIVO, the work we have done to customize and extend Vitro to fit cataloging needs in the bibliographic metadata domain has potentially direct implications or correlations with VIVO architectural needs and potential evolution.
Steven Folsom , Cornell University
Jason Kovari , Cornell University
Dean Krafft , Cornell University
Simeon Warner , Cornell University
Michelle Futornick , Cornell University
Following similar programs in Europe, as part of the National Innovation and Science Agenda launched in 2015, the Australian Government included a measure to introduce a national research engagement and impact (REI) assessment. This assessment examines how Australian universities are translating their research into economic, social, and other benefits, and encourages greater collaboration between universities, industries, and other end-users of research. A pilot program was run in 2017, with a national roll-out in 2018. Until now, researchers have not normally collected the type of data required to support REI assessment reporting: case studies written in accessible, easily understood language that highlight the real-world impact of the research, with details on engagement with collaborators and beneficiaries, along with quantification of the impact and engagement and evidence to support the overall story. While various software solutions are now available to assist with this data collection, management, and submission process, the University of Wollongong (UOW) wanted to utilize resources already available and familiar to the UOW research community as part of the overall REI initiative. As such, we undertook a project to extend UOW Scholars (the UOW VIVO implementation) to provide: * an intuitive interface for researchers to create and share Impact Stories (mini case studies) directly within their existing Scholars profiles * reporting functionality for the REI team to discover new impact stories from UOW Scholars with the potential to become the basis of full case studies for the REI assessment submission * a showcase of the impact stories directly within the researcher profile for UOW Scholars visitors. The project involved development work to integrate the new data points within VIVO and a suitable user interface to allow creation and maintenance of the impact stories. The process was supported by a promotional competition in which entrants were required to update their profile and include at least one impact story. We extended our in-house VIVO data manager tool (developed to perform nightly updates from our publication management system and provide reporting capabilities on the VIVO data) to provide high-level reports for the REI team. Overall, the project was a success: UOW Scholars now includes 61 impact stories from 55 researchers across 33 fields of research. The integration of impact stories within UOW Scholars has helped support the overall REI initiative, allowing researchers to understand and participate in the process in a tangible manner. Going forward, we plan to better promote the impact stories and examine the data for potential visualisations and reporting opportunities. This talk will discuss: * the process of extending VIVO (ontology, API and user interface development) to capture Impact Stories * the development of Impact Stories and their structure * development of reporting capabilities within the in-house VIVO data manager application * user support and promotional activities
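As a purely hypothetical sketch of how a new data point such as an Impact Story could be attached to a researcher profile in a VIVO triple store, the snippet below creates one story as RDF. The uow: class and properties, the URI base, and the story text are placeholders; the actual UOW ontology extension is not shown here.

```python
# Hypothetical sketch: attach an "Impact Story" resource to a researcher in VIVO-style RDF.
# The uow: namespace, URI base, and property names are illustrative assumptions only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

VIVO = Namespace("http://vivoweb.org/ontology/core#")
UOW = Namespace("http://example.org/uow-extension#")    # placeholder namespace
BASE = "http://scholars.example.edu/individual/"        # placeholder URI base

g = Graph()
researcher = URIRef(BASE + "researcher123")
story = URIRef(BASE + "impactstory456")

g.add((story, RDF.type, UOW.ImpactStory))
g.add((story, RDFS.label, Literal("Improving water quality monitoring in regional communities")))
g.add((story, UOW.fieldOfResearch, Literal("Environmental Engineering")))
# Link story and researcher in both directions, mirroring VIVO's usual relates/relatedBy pattern.
g.add((story, VIVO.relates, researcher))
g.add((researcher, VIVO.relatedBy, story))

print(g.serialize(format="turtle"))
```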
Keith Brophy , University of Wollongong
Much of the VIVO code base is understood only by a few original core developers. For VIVO to grow and thrive, we need new developers with new energy and new ideas. But where will they come from? The goal of this workshop is to provide developers with a practical, hands-on introduction to selected VIVO development approaches and debugging methods. We also hope to highlight the process for contributing back to the VIVO core code. The workshop is intended to encourage further developer engagement in the VIVO community, to help begin training developers, and to build a broader base of VIVO committers. Attendees will be provided with a virtual machine containing VIVO and the Eclipse IDE prior to the workshop and will be led through the workshop by VIVO developers.
Publons (www.publons.com) is a rapidly growing online community where scholars track their peer review activity. More than 320,000 researchers have joined the site since its launch in 2012 in order to capture and highlight peer reviews, which have traditionally been undervalued when assessing the full breadth of scholarly activity. In this session, we will present on integrating Publons data into VIVO in two different ways: as a JavaScript widget and via data ingest using the Publons API and Python scripts. The Publons widget, currently in a limited release as a demo, allows a VIVO site to embed Publons data with a single line of JavaScript. Enabling the widget requires adding a data property to the site’s local ontology to capture Publons IDs and editing a single Freemarker template. Once enabled, the widget appears on the page of any person with a Publons ID, showing a visual representation of the person’s recent peer review activity and linking to the full Publons profile. A more comprehensive integration of Publons data can also be achieved using the Publons API, Python scripts, and ontology extensions. A full ingest of Publons data enables peer review activity to be integrated throughout VIVO, e.g., on a person’s page, on journal pages, and (when made available by publishers) on individual article records. Publons was acquired by Clarivate in 2017, joining a portfolio of services that includes Web of Science and InCites. Instructions and code for the above are available at https://www.github.com/Clarivate-SAR.
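The full-ingest path might look roughly like the sketch below: fetch a researcher’s review summary from the Publons API and convert it into triples against a local ontology extension. The endpoint path, response fields, and ex: ontology terms are assumptions for illustration only; the actual scripts are in the Clarivate-SAR repository linked above.

```python
# Rough sketch of a Publons-to-VIVO ingest step.
# The endpoint URL, JSON fields, and ex: ontology terms are assumptions, not the documented API.
import requests
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/ontology/publons#")   # hypothetical local extension
VIVO_BASE = "http://vivo.example.edu/individual/"        # placeholder URI base

def fetch_review_summary(publons_id, token):
    """Fetch a reviewer summary; URL and response shape are placeholders."""
    resp = requests.get(
        f"https://publons.com/api/v2/academic/{publons_id}/",  # placeholder endpoint
        headers={"Authorization": f"Token {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def summary_to_triples(person_uri, summary):
    """Convert the summary into RDF triples that could be loaded into VIVO."""
    g = Graph()
    person = URIRef(person_uri)
    g.add((person, RDF.type, EX.Reviewer))
    g.add((person, EX.publonsId, Literal(summary["id"])))
    g.add((person, EX.reviewCount, Literal(summary.get("review_count", 0), datatype=XSD.integer)))
    return g

# Usage with hypothetical identifiers:
# g = summary_to_triples(VIVO_BASE + "n1234", fetch_review_summary("123456", "MY_TOKEN"))
# print(g.serialize(format="turtle"))
```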
The rising interest in advanced research analytics at the university leadership level calls for innovative solutions to keep up with demand. The Technical University of Denmark has started developing a VIVO-based Research Analytics Platform (VIVO RAP) and has recently released the first two modules of the service. VIVO RAP imports data from the Web of Science (WoS) and draws on the InCites API. It runs as an internal university service, but the software is available as open source on GitHub and may be used and adapted by anyone. VIVO RAP, to be launched in April 2018, initially features a university collaboration module and a university publication module. Reports analyzing the university’s global collaboration help the university leadership understand the overall collaboration landscape and the nature and impact of individual collaborations, at the university as well as the department level. This may aid in identifying existing collaborations to be strengthened or new ones to be initiated, and thus ultimately strengthen the research. For all university-affiliated publications, complete metadata is imported from WoS and converted to linked VIVO data with the necessary ontology extensions. While WoS offers good control of organizational entities at the university level (Organization Enhanced), this is not the case at the department or other sub-organizational levels. As precision at this level is essential for the VIVO RAP analytics, a mapping method was developed to automatically assign publications to the right department, in effect a local “Department Enhanced”. The presentation will: • review the motivation for the project, the “before” situation, the chosen architecture and its main components, and the development and testing approaches • demonstrate the resulting services of the version 1 release, with primary focus on the collaboration analytics module • review the plans for the coming version 2 release later this year • review the WoS/InCites API: functionality, technical specifications, and mission in the Clarivate portfolio • outline the plans for the coming years, including new modules and functionalities, new data sets and types, and new development partnerships within the VIVO community.
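The department-assignment step described above is, at its core, a matching problem between WoS address lines and a locally curated department list. The following is a minimal sketch of one way such a mapping could work, using made-up department aliases; it is not the actual algorithm used in the project.

```python
# Minimal sketch of mapping WoS author-address strings to local departments.
# The alias table and the matching rule are illustrative assumptions only.
import re

# Locally curated aliases -> canonical department name (a local "Department Enhanced" table).
DEPARTMENT_ALIASES = {
    "dtu compute": "DTU Compute",
    "dept of applied mathematics & computer science": "DTU Compute",
    "dtu physics": "DTU Physics",
    "dept of physics": "DTU Physics",
}

def normalize(address):
    """Lowercase, strip punctuation, and collapse whitespace so aliases match more reliably."""
    cleaned = re.sub(r"[^a-z0-9& ]+", " ", address.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

def assign_department(wos_address):
    """Return the canonical department for a WoS address line, or None if no alias matches."""
    cleaned = normalize(wos_address)
    for alias, department in DEPARTMENT_ALIASES.items():
        if alias in cleaned:
            return department
    return None

# Example with a typical WoS-style address line:
print(assign_department("Tech Univ Denmark, Dept of Applied Mathematics & Computer Science, Lyngby, Denmark"))
# -> "DTU Compute"
```

In practice a real mapping would need to handle many more alias variants and ambiguous addresses, which is presumably why the curated table is described as essential for precise department-level analytics.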
Benjamin Gross , Clarivate Analytics
Rob Pritchett , Clarivate Analytics
The integrity of ‘the scholarly record’ is predicated on the accuracy and availability of the information describing that record. The value of this information (high-quality metadata) is often underestimated by those who stand to benefit most from its use and reuse: publishers and authors. This moderated panel session will bring multiple perspectives together to discuss the scholarly record broadly defined, the value of the relationships among different kinds of research objects, and the extent to which workarounds, ingenuity, and a possibly unreasonable degree of discussion are required to address and remediate the shortfalls in how metadata is created and distributed. Topics addressed will include the communication surrounding the value of rich, interlinked metadata; the necessity of variety in schemas, standards, and workflows; and the tradeoffs inherent in aggregating and distributing open data.
Jennifer Kemp , Crossref
Judy Ruttenberg , Association of Research Libraries
Todd Vision , The University of North Carolina at Chapel Hill
Clare Dean , Metadata 2020
Profiles Research Networking Software (RNS) is a free open source semantic web application which uses the VIVO ontology to generate searchable online profiles of an organization’s investigators (http://profiles.catalyst.harvard.edu). This poster describes a new feature called Group Profiles, which creates separate pages in the Profiles RNS website for centers, laboratories, projects, or other groups of people. This allows a group to share information about itself on the Profiles RNS website and link to the profile pages of its members. Group Profiles was developed as a collaboration between the Harvard Profiles RNS team and Christopher Shanahan and Chris Dorney from the Boston University Clinical Translational Science Institute (CTSI). Phase 1 of the Group Profiles project includes the following functionality: 1) Membership. Managers of Group Profile pages can affiliate individuals within the Profiles RNS website as members of the group. The list of members are shown on the Group Profile page; and, the groups a person is affiliated with are shown on his or her profile page in a My Groups module. 2) Custom Page Layout. Within the Profiles RNS framework, Group Profiles are a distinct object class, which means they can have different content modules and page layout than person profile pages. 3) Custom Modules. Certain Group Profile modules enable authorized group managers to post content directly to the Group Profile pages. Other modules pull content from member profile pages to display on the Group Profile page. The current list of modules include: (a) Photo (group logo, picture, etc.); (b) Welcome, About Us, and Contact text; (c) Open Social Gadgets (e.g., links to other groups or external websites); and (d) Publications. 4) Security. Group Profiles utilize the Profiles RNS security model, which enables group managers to control whether content is visible to the public or restricted to certain users. 5) Search. Group Profile page content is searchable through the website and the Profiles RNS APIs. 6) Open Data. Group Profile page content is integrated into the Profiles RNS semantic web architecture so that it can be exchanged with other applications (like VIVO) via linked open data. We plan to expand the functionality of Group Profiles in the future and hope that other institutions also contribute new modules.
Nick Brown , Harvard Medical School
Christopher Shanahan , Boston University
Christopher Dorney , Boston University
Peter Flynn , Boston University
The Texas A&M University (TAMU) Libraries recently launched the beta version of Scholars@TAMU (https://scholars.library.tamu.edu/), a faculty profile system that showcases TAMU research, supports the creation of interdisciplinary research teams, and allows anyone to search the range of Texas A&M expertise. The system is based on member-supported, open-source, semantic web software (VIVO). The Libraries’ team has developed a research information ecosystem for publishing data to Scholars@TAMU. The ecosystem broadly consists of three parts: data sources, a data editor, and a public-facing layer. This presentation mainly covers the editor, which allows faculty to engage with their profiles. In addition, marketing materials for Scholars@TAMU will be shared with the audience. The development team implemented its own profile editor that allows faculty to interact with various data sources. While Symplectic Elements is the primary data source for scholarly publications, the system pulls data from other campus sources as well (e.g., TAMU faculty and HR databases, awards, teaching activities, institutional repositories, and ORCID). The team uses the editor to let faculty engage with their information in an iterative process between the faculty and the Libraries’ scholarly communication staff, and then uses the editor to tie it all together and publish the data to VIVO. We believe that faculty engagement with their profiles increases the quality of the data in Scholars@TAMU. To date, 39% of profile owners have accessed and updated their profiles.
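As a rough illustration of the “tie it all together and publish” step, the sketch below merges records from two hypothetical campus sources into one profile and serializes it as VIVO-style RDF. The source record formats, URIs, and property choices are simplified assumptions, not the TAMU pipeline itself.

```python
# Illustrative sketch: merge two campus source records and emit VIVO-style RDF for one person.
# Source records, URIs, and property usage are simplified assumptions.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF, RDFS

VIVO = Namespace("http://vivoweb.org/ontology/core#")
BASE = "http://scholars.example.edu/individual/"   # placeholder URI base

# Hypothetical records pulled from an HR feed and from a publications source.
hr_record = {"netid": "jdoe", "name": "Jane Doe", "title": "Associate Professor"}
elements_record = {"netid": "jdoe", "orcid": "0000-0002-1825-0097"}

def build_profile(hr, elements):
    """Combine the two records into a single graph describing one person."""
    g = Graph()
    person = URIRef(BASE + hr["netid"])
    g.add((person, RDF.type, FOAF.Person))
    g.add((person, RDFS.label, Literal(hr["name"])))
    g.add((person, VIVO.preferredTitle, Literal(hr["title"])))
    if elements.get("orcid"):
        g.add((person, VIVO.orcidId, URIRef("https://orcid.org/" + elements["orcid"])))
    return g

graph = build_profile(hr_record, elements_record)
print(graph.serialize(format="turtle"))  # this RDF could then be loaded into the VIVO instance
```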
Dong Joon Lee , Texas A&M University
Doug Hahn , Texas A&M University
Ethel Mejia , Texas A&M University
Bruce Herbert , Texas A&M University
Michael Bolton , Texas A&M University
It is not sufficient to translate the labels of the VIVO-ISF ontology in order to adapt VIVO to the needs of German research institutions. There is a need for an ontology extension that is tailored to the specifics of the German academic landscape, especially with regard to language and (academic) culture. In order to involve as many stakeholders as possible, a collaborative approach to ontology management is needed. Basing the corresponding workflows solely on Git is one option, and it is the one currently chosen by the German VIVO community (VIVO-DE). However, there is demand for more user-friendly ways to work together. A small number of tools have been developed specifically for collaborative ontology work; one of them is VoCol. In this poster we describe possible use cases for VoCol in the VIVO-DE context, the challenges we anticipate, and our suggestions for addressing them in the future. VoCol was originally developed at the Fraunhofer Institute IAIS in Bonn and, from 2018 on, will be developed jointly by Fraunhofer IAIS and the Technische Informationsbibliothek (TIB) – German National Library of Science and Technology in Hannover. It is based on open source software such as Java libraries and the Jena Fuseki SPARQL server, and requires nothing from the user but a standard Git-based repository. The application serves as a frontend that facilitates collaborative ontology development. It supports the Semantic Web standards OWL and SKOS, and features functionalities such as a Turtle editor, syntax validation, options for automated documentation, tools for ontology visualization, evolution reports, content negotiation, and a SPARQL endpoint. VoCol is currently being tested at the TIB and by other VIVO-DE community members with the goal of assessing its suitability for collaborative editing and management of the KDSF-VIVO-Alignment and VIVO-DE-Extension vocabularies. Despite the aforementioned benefits of VoCol, there is still room for improvement, for example regarding performance and more intuitive user guidance. Moreover, VoCol could be taken to the next level by adding more functionality, such as more sophisticated structural validation and verification routines, and alignment tools for establishing mappings between several vocabularies.
Anna Kasprzik , Technische Informationsbibliothek (TIB) – German National Library of Science and Technology
Due to a number of state and federal regulations and other obligations, publicly funded institutions in Germany have to fulfil a variety of reporting duties. One example of such a regulation is the guideline for transparency in research in the German federal state of Lower Saxony. This guideline specifies which information about third-party funded research projects has to be made publicly available by the universities in Lower Saxony. Besides the federal bodies, German governmental and European funding agencies such as the Leibniz Association, the European Commission, and the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) also demand reports about staff, research activities, infrastructure, and other information. Compliance with certain standards is another key aspect of reporting. For example, the German Science Council asks research institutions to collect research information according to defined standardized criteria (Research Core Data Set, KDSF). The TIB transformed the data model of the KDSF to make it usable in VIVO. CERIF is another standard of high importance for German research institutions. The Technische Informationsbibliothek (TIB) – German National Library of Science and Technology has decided to use VIVO for reporting. In the scope of the VIVO-KDSF project, an internal VIVO is going to be used to generate reports in accordance with the KDSF. This poses some technical and ontological challenges to a standard out-of-the-box VIVO. To allow the use of VIVO in such a context, it has to comply with a set of laws, rules, and regulations, e.g. regarding privacy and the protection of employees. These require some information to be visible only to specific user groups. Furthermore, to achieve a high quality of research information, validating and editing workflows are needed. To establish these workflows, some development work on VIVO has to be done. This includes, among other things, advanced role and rights management and a tool to track changes and who is responsible for them. On the technical side, a reporting module integrated into VIVO is needed for report production. For now, VIVO is not technically geared toward reporting, as its basic goal is information representation on the web. VIVO provides a SPARQL query editor which can be used for reports, but it requires deep knowledge of SPARQL and the VIVO data model. A convenient reporting component should include a user interface that can be operated intuitively by administrative staff who are normally not familiar with SPARQL. The user interface should offer a number of options to configure individual reports and should support export of data in different formats such as CSV and PDF. Visualization of data in charts and diagrams has to be provided as well. This poster describes the developments (to be) conducted at the TIB and the challenges it has been facing concerning the use of VIVO for reporting.
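To make concrete the kind of query a reporting component would generate behind the scenes, the sketch below counts publications per organization against a VIVO SPARQL endpoint. It assumes common VIVO-ISF modeling (positions and authorships linked via vivo:relates/vivo:relatedBy) and a placeholder endpoint URL; a KDSF-aligned data model would require adjustments.

```python
# Sketch of a report query (publications per organization) against a VIVO triple store.
# The endpoint URL is a placeholder (e.g. a Fuseki endpoint fronting the VIVO store);
# the property paths assume standard VIVO-ISF modeling and may differ in a local instance.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX bibo: <http://purl.org/ontology/bibo/>
PREFIX vivo: <http://vivoweb.org/ontology/core#>

SELECT ?orgLabel (COUNT(DISTINCT ?doc) AS ?publications)
WHERE {
  ?org a foaf:Organization ;
       rdfs:label ?orgLabel ;
       vivo:relatedBy ?position .
  ?position a vivo:Position ;
            vivo:relates ?person .
  ?person a foaf:Person ;
          vivo:relatedBy ?authorship .
  ?authorship a vivo:Authorship ;
              vivo:relates ?doc .
  ?doc a bibo:Document .
}
GROUP BY ?orgLabel
ORDER BY DESC(?publications)
"""

endpoint = SPARQLWrapper("http://localhost:3030/vivo/sparql")  # placeholder endpoint
endpoint.setQuery(QUERY)
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["orgLabel"]["value"], row["publications"]["value"])
```

The envisioned reporting module would hide queries like this behind a form-based interface and add export to CSV or PDF, so that administrative staff never have to touch SPARQL directly.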
Tatiana Walther, Technische Informationsbibliothek (TIB) – German National Library of Science and Technology
According to the Global Application and Network Security Report 2007-2018, cyber attacks spiked by 40 percent in the year 2017, and half of the surveyed companies reported financially motivated cyber attacks against them. Concerning information security, the BSI, the German Federal Office for Information Security, has developed an advisory catalog for IT security in Germany. The catalog highlights the policies and strategies that IT infrastructures should adopt in order to meet the requirements of modern information security and standardization. A study of the catalog revealed that VIVO lacks some key security features, such as a) browser session expiration, b) secure, salted password hashing, and c) explicit labeling of external URLs and tooltips for forms, fields, and buttons. Furthermore, there are some suggestions that institutions using VIVO, or planning to use it, should take into consideration. This poster/presentation focuses on the security-related technical challenges, and their possible solutions, that TIB Hannover needs to address in VIVO to meet the standards of the BSI IT security catalog.
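To illustrate one of the catalog items mentioned above, the following minimal Python sketch shows salted password hashing with PBKDF2 from the standard library. It only illustrates the BSI recommendation; it does not describe how VIVO currently stores credentials or how the change would be implemented.

    # Sketch of salted password hashing with PBKDF2 (Python standard library).
    # It illustrates the BSI recommendation only; it does not describe how VIVO
    # stores credentials today or how the change would be implemented.
    import hashlib
    import hmac
    import os

    def hash_password(password, iterations=200_000):
        salt = os.urandom(16)                      # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, digest

    def verify_password(password, salt, digest, iterations=200_000):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, digest)   # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True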
Christian Hauschke, Technische Informationsbibliothek (TIB) – German National Library of Science and Technology
Martin Barber, Technische Informationsbibliothek (TIB) – German National Library of Science and Technology
Qazi Asim Ijaz Ahmad, Technische Informationsbibliothek (TIB) – German National Library of Science and Technology
After a year of development and testing, Brown University rolled out a new frontend to VIVO in December 2017. The goals of the new frontend were twofold: an improved, more modern user experience, and easier customization and enhancements. The Brown team accomplished this by creating a standalone Ruby on Rails frontend, decoupling the user-facing application from the VIVO backend. Decoupling the frontend and the backend permits easier and more frequent updates to the user experience, while allowing the VIVO application to stay in place as is. This preserves the core features of VIVO, like semantic data and RDF publishing facilities. The new approach enables Brown to build a modern user experience on top of semantic data without having to compromise either. In this presentation, Brown’s lead VIVO developers will review the new frontend application and the overall project architecture. They will follow up with a description of the project rollout, including obstacles encountered and solutions. This will lead into an analysis of the current state of the project: ongoing and future enhancements to the frontend, and how they are facilitated by the new architecture. New features that are already or soon to be available include data visualizations and mobile-friendly design. The presentation concludes with plans for future development at Brown, and recommendations for how this model can be adopted by the general VIVO community.
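The sketch below is not Brown's Ruby on Rails code; it is a generic Python illustration of the decoupling idea, in which a separate frontend fetches a profile's RDF from the VIVO/Vitro backend over HTTP and renders only what it needs. The profile URI and the Accept type offered by the backend are assumptions.

    # Not Brown's Ruby on Rails code: a generic sketch of the decoupling idea,
    # in which a separate frontend process fetches a profile's RDF from the
    # VIVO/Vitro backend over HTTP and renders only what it needs.
    # The profile URI and the Accept type offered by the backend are assumptions.
    import requests
    from rdflib import Graph
    from rdflib.namespace import RDFS

    profile_uri = "https://vivo.example.org/individual/n12345"   # placeholder
    resp = requests.get(profile_uri, headers={"Accept": "text/turtle"}, timeout=30)
    resp.raise_for_status()

    g = Graph()
    g.parse(data=resp.text, format="turtle")

    # Pull the label(s) the frontend would display on the profile page.
    for label in g.objects(None, RDFS.label):
        print(label)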
Steven Mccauley, Brown University
Hector Correa, Brown University
Jean Rainwater, Brown University
Stanford University is a large research institution, with over $1.6 billion in sponsored research. Along with other sources of funding, this produces an enormous amount of research. The results of this work are published in journals or books, presented at conferences, taught in classes and workshops, and captured in data sets. In order to help Stanford work towards new opportunities and fulfill its mission, we seek to build a system that helps understand and catalog this research output, capture it in preservable form, and understand how it is interconnected. Stanford currently maintains separate systems for tracking researchers, grants, publications, and projects, but it has no system for combining this information and further tracking and managing its research output: the tangible artifacts of articles, data, books, and projects that advance human knowledge. RIALTO is a system designed to close that loop, helping provide a holistic picture of the University’s activity and impact, while also eliminating the waste caused by inefficient, duplicate data entry and the opportunity costs stemming from a lack of information. We are seeking to use VIVO as the system that underlies RIALTO, leveraging the work already done in linked data storage, ontologies, and overall architecture. We are also seeking to leverage, and contribute to, work around building novel reporting user interfaces and visualizations on top of the core VIVO codebase. Work on this system will take place at Stanford in the spring of 2018. This work will include the installation of a core VIVO, and the building of connectors to our citation database, biographical database, and a limited amount of funding information. We also intend to build a user interface with several reports based on use cases gathered from across Stanford. At VIVO 2018, we intend to discuss the results of our initial work, including use cases gathered, technical progress to date, lessons learned, and future directions.
Peter Mangiafico, Stanford University
Tom Cramer, Stanford University
“Rialto” is (will be…) a scholarship and research output and impact data aggregator and dashboard for the campus to use. As part of our assessment of what kind of back end to use, we are in the process of running a number of load tests, exploring and testing the various ways to get data into Vitro (https://github.com/sul-dlss/rialto/wiki/Performance-Assssment). We invite discussion about the data loading techniques people use: what works well, and what works not so well.
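One loading technique worth comparing in such tests is pushing triples in batches through VIVO's SPARQL UPDATE API. The sketch below assumes the endpoint path, the credential parameters, and the target graph URI, all of which vary by installation.

    # Sketch of one bulk-loading technique: pushing triples in batches through
    # a SPARQL UPDATE endpoint. The endpoint path, the email/password
    # parameters, and the target graph URI are assumptions -- they vary by
    # VIVO/Vitro installation and configuration.
    import requests

    ENDPOINT = "https://vivo.example.org/api/sparqlUpdate"   # placeholder
    GRAPH = "http://vitro.mannlib.cornell.edu/default/vitro-kb-2"

    def load_batch(ntriples_lines, batch_size=5000):
        for i in range(0, len(ntriples_lines), batch_size):
            chunk = "\n".join(ntriples_lines[i:i + batch_size])
            update = f"INSERT DATA {{ GRAPH <{GRAPH}> {{ {chunk} }} }}"
            resp = requests.post(
                ENDPOINT,
                data={"email": "loader@example.org", "password": "secret",
                      "update": update},
                timeout=300,
            )
            resp.raise_for_status()

    triples = ['<http://example.org/n1> <http://www.w3.org/2000/01/rdf-schema#label> "Test" .']
    load_batch(triples)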
Josh Greben, Stanford University
A lot of work has been done by VIVO developers over the last several years on linking people across separate VIVO instances. What went right? What went wrong? What can we do to move this forward? Are there solutions already out there that we can bring in and use?
Ralph O’Flinn, The University of Alabama at Birmingham
Much of the VIVO code base is understood only by a few original core developers. For VIVO to grow and thrive, we need new developers with new energy and new ideas. But where will they come from? The goal of this workshop is to provide developers with a practical, hands-on introduction to selected VIVO development approaches and debugging methods. We also hope to highlight the process for contributing back to the VIVO core code. The workshop is intended to encourage further developer engagement in the VIVO community, to help begin training developers, and to build a broader base of VIVO committers. Attendees will be provided with a virtual machine containing VIVO and the Eclipse IDE prior to the workshop and will be led through the workshop by VIVO developers.
We have a virtual machine for prototyping VIVO. What works and what doesn’t? What are the next steps for maintaining this in the vivo-community repository?
Don Elsborg, University of Colorado Boulder
The VIVO code base has grown over 15 years to more than half a million lines of Enterprise Java code. Experience has shown a steep learning curve for new developers, especially front end developers, and difficulty in integrating newer web development technologies and approaches. Is there an opportunity to experiment with new technologies and techniques that are easier for new developers to dive into? The VIVO Product Evolution group is leading an effort to turn this opportunity into a reality. The vision is to prototype an agile web/mobile application that showcases the researchers, units, and scholarly works of an institution with their branding. This workshop will be a working session for the VIVO Product Evolution group, but also an occasion to engage with other interested VIVO community members. The workshop will include updates and discussion involving the group leadership and current subgroups, lightning talks exploring new technologies and methods, and breakout sessions for the subgroups. The current subgroups:
* Representing VIVO Data
* Functional requirements
* Implementing the presentation layer
Technologies, standards, and approaches under evaluation (a brief JSON-LD sketch follows below):
* JSON and JSON-LD
* Solr and ElasticSearch
* GraphQL
* Schema.org as well as other scholarly information and data models (CERIF, CASRAI, COAR, CD2H)
* Javascript frameworks (React, Angular, Vue, etc.)
* Modern Agile principles
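To make the JSON-LD and Schema.org items in the list above concrete, the sketch below renders a researcher profile as schema.org JSON-LD in Python. The values are invented and the property choices follow schema.org rather than any settled VIVO mapping.

    # Sketch: a researcher profile expressed as schema.org JSON-LD, one of the
    # representations under evaluation. The values are invented and the
    # property choices follow schema.org, not a settled VIVO mapping.
    import json

    profile = {
        "@context": "https://schema.org",
        "@type": "Person",
        "@id": "https://vivo.example.org/individual/n12345",
        "name": "Ada Example",
        "jobTitle": "Associate Professor",
        "affiliation": {"@type": "CollegeOrUniversity", "name": "Example University"},
        "identifier": {"@type": "PropertyValue", "propertyID": "ORCID",
                       "value": "https://orcid.org/0000-0000-0000-0000"},
    }

    print(json.dumps(profile, indent=2))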
Alex Viggio, University of Colorado Boulder
User needs and requirements are one of the fundamental areas where we can engage the VIVO community in order to better understand how the VIVO software can evolve to provide improved services to its users. We propose a panel highlighting examples of obtaining researcher and faculty feedback on VIVO using both informal and formal user-centered methods. We will share what we have done to gather input from users, how this work has informed or could inform our design and development for our individual VIVO instances, and how our different use cases and user stories could help inform future directions for VIVO. We will encourage discussion about what others in the VIVO community or related research information management communities see as potentially important directions for design and development. EarthCollab is a National Science Foundation EarthCube grant-funded project which seeks to use VIVO to model and interlink information about geoscience projects, data, and contributors. Feedback and evaluations from researchers and faculty on VIVO instances developed as part of this project were obtained through multiple methods, including an in-person one-day workshop, a survey, usability testing, a focus group, and task-oriented user sessions and interviews with scientific stakeholders. This feedback helped inform the design of the project’s two VIVO instances and shed light on some of the ontological needs for the project. We will provide a summary of approaches and results of our work to date. The German National Library of Science and Technology (TIB) has used multiple methods to research user needs in research information systems. We conducted a study about user behavior with regard to research information systems using semi-structured interviews with 30 scientists from both universities and non-university research institutions. Additionally, we conducted a survey of 1,460 researchers, which included CRIS-related questions, about the information and publication behavior of researchers in science and technology. Furthermore, the TIB hosted the 2nd VIVO workshop with 40 participants from various German-speaking universities and institutions. Part of this workshop was devoted to having participants discuss and prioritize future work in VIVO development. The report from this workshop states that “Improved functionalities for research reporting are heavily desired here.” We will discuss the overall results from this work in addition to other feedback and impressions we have received from institutions in Germany that are exploring the use of VIVO. The Scholars@Duke team holds quarterly (previously monthly) user group meetings where it engages with the Scholars and Elements user communities. Additionally, the team holds annual faculty focus group lunches to solicit faculty feedback, conducts usability testing with faculty and power users, meets with faculty members and researchers one-on-one, and regularly demonstrates the Scholars application to researchers and faculty. The Scholars team also leverages user metrics from Google Analytics and Tableau to augment end user feedback and further inform design decisions. As part of this panel, we will discuss the multiple approaches used to gather feedback and how this feedback, along with user metrics, informed design or spurred development directions.
Matthew Mayernik, National Center for Atmospheric Research
Don Stott, National Center for Atmospheric Research
This poster will report the process, findings, and next steps of the Research Graph VIVO Cloud Pilot, a collaboration between DuraSpace and Research Graph. Many VIVO implementers find collecting, mapping, and loading data into VIVO to be quite difficult. For example, data on publications, grants, and datasets produced by an institution’s faculty can be difficult to find and disambiguate. Understanding the ontologies used to describe data in VIVO and mapping faculty data to those ontologies involves a steep learning curve. Also, transforming the data to a linked data format, such as VIVO RDF, has proven difficult for most implementers due to gaps in skills and knowledge. These barriers have prevented organizations from joining the VIVO community and adopting the technology that enables access, discovery, and analysis of scholarship data. Research Graph is an integrated network of information about researchers, their publications, grants, and datasets, across global research infrastructures such as ORCID, DataCite, CERN, CrossRef, and funders such as National Institutes of Health (NIH). For example, when provided “seed data,” such as a simple list of researchers, Research Graph will identify publications, grants, and/or datasets related to those researchers and represent the information in a graph. These are referred to as “first order” connections. Research Graph is also capable of identifying and linking collaborators of the people in the “first order” data and linking their publications, grants and datasets. These collaborator links are referred to as “second order” connections. A recent collaboration between VIVO and Research Graph developed a repeatable process for using seed data to build first and second order graphs, and to export, transform, and load those graphs in VIVO RDF format to a hosted VIVO instance. We believe 1) Repositories and Research Institutes, 2) Semantic Web Sites of government and research organizations, and 3) Current VIVO Sites that wish to enrich and augment their data can benefit from the collaboration between VIVO and Research Graph. The Cloud Pilot will have participants representing these three types of organizations. The project will determine the value and potential of a long-term collaboration between VIVO and Research Graph in the form of new services that could reduce barriers for organizations that want to find, disambiguate, transform, and map research data.
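As a toy illustration of the first order and second order connections described above (not the Research Graph API, and with invented data), the following Python sketch builds such a graph with networkx.

    # Toy illustration of "first order" and "second order" connections with
    # invented data -- this is not the Research Graph API, just the graph idea.
    import networkx as nx

    G = nx.Graph()

    # First order: link seed researchers to their own works.
    first_order = {"Researcher A": ["Paper 1", "Dataset 1"],
                   "Researcher B": ["Paper 2"]}
    for person, works in first_order.items():
        G.add_edges_from((person, work) for work in works)

    # Second order: link collaborators found on those works, and their works.
    G.add_edge("Paper 1", "Researcher C")      # collaborator on a seed work
    G.add_edge("Researcher C", "Grant 7")      # collaborator's own work

    print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
    print("Neighbors of Researcher A:", list(G.neighbors("Researcher A")))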
Heather Greer Klein, DuraSpace
In the context of VIVO, “the value of making scholarly data open, found, and consumed” is predicated on a solid foundation of vibrant and sustainable core software, as well as a community aligned on strategic initiatives. This session will discuss specific ways that the community is evolving in order to be more inclusive, more effective, and more focused as we move towards a revitalized VIVO platform.
Andrew Woods, DuraSpace
Gathering data for a VIVO implementation is known to be a chore -- publications, grants, and datasets can be difficult to identify and disambiguate. For VIVO sites, the “golden query” is easily stated: “Find all the works of my institution for time period x.” If sites were able to execute the golden query, they could get all the works of their people on a timely basis. Combining these works with local data such as positions, courses taught, photos, overviews, and contact information would lead to a fully populated VIVO with minimal effort. For years, VIVO sites have been harvesting data from a wide variety of sources in search of the golden query. Dimensions is a new licensed product of Digital Science that organizes and presents information on scholarly works. Dimensions contains information on people, publications, grants, clinical trials, and datasets. Dimensions has an API that can be used to gather and return data using the Dimensions Search Language (DSL). Using DSL, queries can be written to find the works of scholars at institutions in specified time periods. In this poster, we demonstrate the use of DSL queries from Python which are then transformed to VIVO RDF. The software can be found at https://github.com/mconlon17/vivo-dimensions. Has the golden query been found? Stop by our poster to find out.
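The sketch below shows the general shape of such a query from Python: authenticate, post a DSL query, and walk the results before mapping them to VIVO RDF. The endpoint paths, the DSL string, and the returned field names are assumptions here; the repository linked above contains the actual implementation.

    # General shape of querying the Dimensions API from Python: authenticate,
    # post a DSL query, walk the results. The endpoint paths, the DSL string,
    # and the returned field names are assumptions; the repository linked
    # above contains the actual implementation.
    import requests

    BASE = "https://app.dimensions.ai/api"

    # 1. Exchange credentials for a token (details depend on the subscription).
    auth = requests.post(f"{BASE}/auth.json", json={"key": "YOUR-API-KEY"})
    auth.raise_for_status()
    headers = {"Authorization": f"JWT {auth.json()['token']}"}

    # 2. Post a DSL query for an institution's publications in a time period.
    dsl = ('search publications where research_orgs.id = "grid.000000.0" '
           'and year in [2017:2018] return publications[doi + title + year]')
    resp = requests.post(f"{BASE}/dsl.json", data=dsl, headers=headers)
    resp.raise_for_status()

    # 3. Walk the results (these would next be mapped to VIVO RDF).
    for pub in resp.json().get("publications", []):
        print(pub.get("year"), pub.get("doi"), pub.get("title"))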
Michael Conlon, VIVO Project
This workshop will offer a hands-on opportunity to work with the newly released Dimensions API. Dimensions is a unique linked research knowledge system linking and standardizing publications, clinical trials, patents, and funded grant awards metadata across hundreds of data sources globally. Working with Jupyter notebooks, workshop participants will be given the opportunity to work through a number of use cases, including:
* How to produce VIVO RDF from the Dimensions API
* How to create collaboration diagrams based on Dimensions API searches
* Approaches to creating your own metrics with multiple research sources
* An approach to research demographic analysis
The workshop will be limited to 20 participants. This workshop is intended for:
* Researchers intending to use the Dimensions API as part of their research
* Institutions using or considering a Dimensions subscription
Creating an institutional research profiling system requires herculean acts of data corralling from multiple internal institutional systems. Within an institution, the knowledge that you need resides in HR, Finance, Grants, Publication Management, and Student Systems, to name a few. Getting access to this information requires multiple negotiations with many institutional stakeholders. This process requires extreme patience on your part to explain to internal stakeholders why information collected for one purpose can also be used for another. When you do get access to the information that you need, in many cases it is not in the format that you would ideally like. The titles of projects can be in ALL CAPS, HR position titles can be truncated, and dollar amounts for grants may not reflect the external amount of the award. How do we turn this situation around? How do we increase the awareness of the data stewards of HR, Finance, and Student Systems that they are research information stewards too? Within an institutional context, how can we define what it is to be a research information citizen? Building on a similar exercise held at Pidapalooza earlier this year, participants at this workshop will aim to identify the shared understanding and norms that should govern how we handle and communicate research information within an institution. In doing so, it is hoped that we can take the first step towards baking research profiling into the reasons that institutions collect information in the first place.
Simon Porter, Digital Science
The rising interest in advanced research analytics at university leadership level calls for innovative solutions to keep up with demand. The Technical University of Denmark has started the development of a VIVO-based Research Analytics Platform (VIVO RAP) and has recently released the first two modules of the service. VIVO RAP imports data from the Web of Science (WoS) and draws on the InCites API. It runs as an internal university service, but the software is available as open source on GitHub and may be used and adapted by anyone. The VIVO RAP – to be launched in April 2018 – initially features a university collaboration module and a university publication module. Reports analyzing the university’s global collaboration support the university leadership in understanding the overall collaboration landscape and the nature and impact of the individual collaborations – at university as well as department level. This may aid in identifying existing collaborations to be strengthened or new ones to be initiated – and thus ultimately strengthen the research. For all university-affiliated publications, complete metadata is imported from WoS and converted to linked VIVO data with the necessary ontology extensions. While WoS features good control of organizational entities at the university level (Organization Enhanced), this is not the case at the university department or other sub-organizational levels. As precision at this level is essential for the VIVO RAP analytics, a mapping method was developed to automatically assign publications to the right department – a local “Department Enhanced”. The presentation will:
• Review the motivation for the project, the “before” situation, the chosen architecture and its main components, and the development and testing approaches
• Demonstrate the resulting services of the version 1 release – with primary focus on the collaboration analytics module
• Review the plans for the coming version 2 release, later this year
• Review the WoS/InCites API: functionality, technical specs, and mission in the Clarivate portfolio
• Outline the plans for the coming years, including new modules and functionalities, new data sets and types, and new development partnerships within the VIVO community.
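The following simplified Python sketch illustrates the idea behind such a local “Department Enhanced” mapping, matching free-text WoS address strings against known department name variants. It is not DTU's actual method, and the variants and addresses are invented.

    # Simplified sketch of the "Department Enhanced" idea: map free-text WoS
    # address strings to a department via known name variants. This is not
    # DTU's actual mapping method; the variants and the address are invented.
    DEPT_VARIANTS = {
        "DTU Compute": ["dtu compute", "dept appl math & comp sci"],
        "DTU Physics": ["dtu physics", "dept phys"],
    }

    def map_department(address):
        text = address.lower()
        for dept, variants in DEPT_VARIANTS.items():
            if any(variant in text for variant in variants):
                return dept
        return None   # unmatched addresses are left for manual curation

    print(map_department("Tech Univ Denmark, Dept Phys, Lyngby, Denmark"))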
Christina Steensboe, Technical University of Denmark
Karen Hytteballe Ibanez, Technical University of Denmark
Mogens Sandfaer, Technical University of Denmark
As the VIVO community begins to explore the nature of its next generation of platform, it is useful to consider a wide range of architectures and technologies. This is particularly true in the area of research profiling, as there is a key need to expand the current model beyond the historical focus on grants and publications and include other forms of scholarly and professional works. This presentation focuses on moving beyond ‘research profiling,’ adopting a more inclusive notion of ‘expertise profiling and discovery.’ Expanding our notion of expertise also requires expanding our sources of data regarding that expertise, as well as how management of those data can be integrated into a unified user experience, while minimizing the effort necessary to maintain those data. The presentation will focus on the following three areas, with brief demonstrations of how each can be achieved. An ontology-derived core. As presented in my VIVO 2017 presentation, we have a running prototype of a VIVO-ISF-compliant platform synthesized from the OWL ontology and an exemplar triplestore (the OpenVIVO data dump). This Tomcat / Java Server Pages / JSP Tag Library stack completely encapsulates the core SPARQL interaction with the triplestore in generated code, allowing developers to concentrate on the user interface level. Dynamic blending of data from multiple sources. The OpenVIVO data include numerous DOIs referring to slide decks and posters deposited in FigShare. I’ll show how those DOIs can be readily recognized and used to dynamically embed the FigShare artifacts into the relevant presentation page, supporting both direct browsing and click-through to the FigShare site. This feature serves as an example of extending the user interface through direct connection at run-time with an external resource. Reformulation of externally derived data as optional modules. The primary target population for the Center for Data to Health (CD2H) is the informatics community within the Clinical and Translational Science Award (CTSA) consortium. This community is heavily invested in multiple shared software systems, many of which are open source. I will discuss our work on identifying and modeling relevant users, organizations, and repositories in GitHub, and how those data, through the use of a sameAs assertion between an OpenVIVO person URI and a GitHub login, can be used to mesh GitHub metadata into our VIVO-ISF-compliant prototype, without modification of the core tag libraries. This feature serves as an example of modular composition of components at the user interface level. As one of the leads for the People, Expertise and Attribution Working Group within the newly created NIH-NCATS Center for Data to Health (https://ctsa.ncats.nih.gov/cd2h/), I invite the VIVO community to engage with us in exploring how we can jointly take these concepts to fruition in the near future.
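As a minimal sketch of the sameAs meshing idea (with placeholder URIs and login, not real identities), the following Python fragment asserts owl:sameAs between an OpenVIVO-style person URI and a GitHub-derived URI, then pulls public metadata from the GitHub REST API.

    # Minimal sketch of the sameAs meshing idea: assert owl:sameAs between an
    # OpenVIVO-style person URI and a GitHub-derived URI, then pull public
    # metadata from the GitHub REST API. The URIs and login are placeholders.
    import requests
    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL

    person = URIRef("http://openvivo.org/a/orcid0000-0000-0000-0000")  # placeholder
    github_login = "example-user"                                      # placeholder
    github_uri = URIRef(f"https://github.com/{github_login}")

    g = Graph()
    g.add((person, OWL.sameAs, github_uri))
    print(g.serialize(format="turtle"))

    # Public profile metadata that could be meshed into the profile page.
    resp = requests.get(f"https://api.github.com/users/{github_login}", timeout=30)
    if resp.ok:
        meta = resp.json()
        print(meta.get("name"), "-", meta.get("public_repos"), "public repositories")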
Formed in late 2017 by a grant from the NIH National Center for Advancing Translational Sciences (NCATS), the National Center for Data to Health (CD2H) is charged with supporting a vibrant and evolving collaborative informatics ecosystem for the Clinical and Translational Science Awards (CTSA) Program and beyond. The CD2H harnesses and expands an ecosystem for translational scientists to discover and share their software, data, and other research resources within the CTSA Program network. The CD2H also creates a social coding environment for translational science institutions, leveraging the community-driven DREAM challenges as a mechanism to stimulate innovation. Collaborative innovation also serves as a strong foundation to support mechanisms to facilitate training, engagement, scholarly dissemination, and impact in translational science. Both VIVO and the CD2H share a common heritage in large, multi-site, collaborative community-driven activities. Indeed, community engagement across a diverse array of professionals is at the core of both groups. The planned work by CD2H in expertise modeling will generate natural extensions to the existing VIVO-ISF ontologies, and will demonstrate the value of a modular approach to ontologies in representing the many contributions and activities in scholarship. We are particularly interested in the collaborative development of new frameworks that can mutually benefit both communities, and will support open workgroups in the areas described below.
David Eichmann, University of Iowa
Several institutions are exploring new methods to evaluate the impact of pilot funding programs within their institutions. Common approaches include assessing the resulting number of publications and grant awards received by funded teams. The South Carolina Clinical and Translational Research Institute, a Clinical and Translational Science Award (CTSA) funded entity at the Medical University of South Carolina (MUSC), has awarded pilot funding to many investigators over the past several years. At MUSC, we have implemented the Harvard Profiles Research Networking Software (RNS). RNS data provide unique opportunities for clean, disambiguated bibliometric data that can be leveraged for network analysis, albeit limited to currently affiliated faculty at a single institution. Naturally, newly enrolled faculty will not have many intra-institutional collaborators, and this number grows during the faculty member’s tenure at the institution. Moreover, publications resulting from pilot funding that acknowledge the CTSA constitute a small fraction of the total RNS publications. Here we explore various methods for overcoming these limitations. One of the metrics we are examining is team formation as impacted by pilot funding. We are using the number of unique co-authors on publications, or degree centrality, as a proxy measure for increased team science. Given the above-mentioned limitations, in order to compare pilot-funded individuals with their peers, several variables have to be considered to level the playing field between the two groups. We examined several variables to assess their correlation with degree centrality, including the time in years since the individual’s first publication at MUSC, the number of total publications for an individual, and the number of publications for an individual since arrival at MUSC. We also examined the interaction between pilot funding and these variables using multivariate linear regression. The correlation coefficients were all positive: time in years since first MUSC publication (0.71), number of total publications (0.13), and number of publications since arrival at MUSC (0.38), with p-values <0.0001 for all the models. However, the adjusted r-squared values were 0.19, 0.29, and 0.56 respectively, revealing the best fit for the regression model between degree centrality and the number of publications for an individual since arrival at MUSC when using RNS data. Adding the binary variable of whether an individual was pilot funded (n=75) or not also had a significant positive impact on degree centrality, p-value=0.003. Given the limitations of using RNS data from a single institution, we believe controlling for the right variables will overcome some of the limitations. Therefore, network analysis using RNS data may yield meaningful insights in assessing the impact of funding on collaborative work. Future work involves looking at the impact of other variables on the analysis, such as faculty rank, and trends over time. We also intend to examine inter-departmental publications as a proxy for interdisciplinarity.
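The following Python sketch shows the general shape of such an analysis, with invented data standing in for the MUSC dataset: degree centrality computed from a co-authorship network is regressed on publications since arrival and pilot funding status.

    # Sketch of the kind of analysis described above, with invented data:
    # degree centrality from a co-authorship network regressed on publications
    # since arrival and pilot funding status.
    import networkx as nx
    import pandas as pd
    import statsmodels.formula.api as smf

    # Co-authorship network: nodes are faculty, edges are shared publications.
    G = nx.Graph()
    G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")])

    df = pd.DataFrame({
        "faculty": list(G.nodes),
        "degree": [G.degree(n) for n in G.nodes],
        "pubs_since_arrival": [12, 5, 20, 8, 3],
        "pilot_funded": [1, 0, 1, 0, 0],
    })

    model = smf.ols("degree ~ pubs_since_arrival + pilot_funded", data=df).fit()
    print(model.params)          # coefficients
    print(model.rsquared_adj)    # adjusted R-squared, as reported above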
Jihad Obeid, Medical University of South Carolina
Dayan Ranwala, Medical University of South Carolina
Tami Crawford, Medical University of South Carolina
Perry Halushka, Medical University of South Carolina
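The MUSC analysis described above combines a co-authorship network (degree centrality as the count of unique co-authors) with multivariate linear regression on publication counts and a pilot-funding indicator. The following is a minimal illustrative sketch of that type of analysis in Python; it is not the authors' actual code, and the co-authorship pairs, covariate values, and column names are invented for illustration.

```python
import networkx as nx
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical intra-institutional co-authorship pairs, e.g. derived from
# disambiguated RNS publication records for currently affiliated faculty.
coauthor_pairs = [
    ("faculty_A", "faculty_B"),
    ("faculty_A", "faculty_C"),
    ("faculty_B", "faculty_D"),
    ("faculty_C", "faculty_D"),
    ("faculty_A", "faculty_D"),
    ("faculty_B", "faculty_E"),
]

G = nx.Graph()
G.add_edges_from(coauthor_pairs)

# Degree here is the raw count of unique intra-institutional co-authors.
degree = dict(G.degree())

# Hypothetical per-faculty covariates; values are placeholders only.
df = pd.DataFrame({
    "faculty_id": list(degree.keys()),
    "degree_centrality": list(degree.values()),
    "pubs_since_arrival": [12, 30, 8, 21, 5],
    "pilot_funded": [1, 0, 0, 1, 0],  # binary pilot-funding indicator
})

# Multivariate linear regression of degree centrality on publications since
# arrival plus pilot-funding status; adjusted R-squared and p-values reported.
model = smf.ols("degree_centrality ~ pubs_since_arrival + pilot_funded", data=df).fit()
print(model.rsquared_adj)
print(model.pvalues)
```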
Formed in late 2017 by a grant from the NIH National Center for Advancing Translational Sciences (NCATS), the National Center for Data to Health (CD2H) is charged with supporting a vibrant and evolving collaborative informatics ecosystem for the Clinical and Translational Science Awards (CTSA) Program and beyond. The CD2H harnesses and expands an ecosystem for translational scientists to discover and share their software, data, and other research resources within the CTSA Program network. The CD2H also creates a social coding environment for translational science institutions, leveraging the community-driven DREAM challenges as a mechanism to stimulate innovation. Collaborative innovation also serves as a strong foundation to support mechanisms to facilitate training, engagement, scholarly dissemination, and impact in translational science. Both VIVO and the CD2H share a common heritage in large, multi-site, collaborative community-driven activities. Indeed, community engagement across a diverse array of professionals is at the core of both groups. The planned work by CD2H in expertise modeling will generate natural extensions to the existing VIVO-ISF ontologies, and will demonstrate the value of a modular approach to ontologies in representing the many contributions and activities in scholarship. We are particularly interested in the collaborative development of new frameworks that can mutually benefit both communities, and will support open workgroups in the areas described below.
Kristi Holmes, Northwestern University
Melissa Haendel, Oregon Health & Science University
The VIVO code base has grown over 15 years to more than half a million lines of Enterprise Java code. Experience has shown a steep learning curve for new developers, especially front-end developers, and difficulty integrating newer web development technologies and approaches. Is there an opportunity to experiment with new technologies and techniques that are easier for new developers to dive into? The VIVO Product Evolution group is leading an effort to turn this opportunity into a reality. The vision is to prototype an agile web/mobile application that showcases the researchers, units, and scholarly works of an institution with the institution’s own branding. This workshop will be a working session for the VIVO Product Evolution group, but also an occasion to engage with other interested VIVO community members. The workshop will include updates and discussion involving the group leadership and current subgroups, lightning talks exploring new technologies and methods, and breakout sessions for the subgroups. The current subgroups are: Representing VIVO Data; Functional Requirements; and Implementing the Presentation Layer. Technologies, standards, and approaches under evaluation include: JSON and JSON-LD; Solr and Elasticsearch; GraphQL; Schema.org as well as other scholarly information and data models (CERIF, CASRAI, COAR, CD2H); JavaScript frameworks (React, Angular, Vue, etc.); and modern Agile principles.
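As a concrete illustration of one of the representations under evaluation, the sketch below shows how a researcher record might be serialized as JSON-LD using Schema.org terms. This is a minimal, hypothetical example: the URI, names, and values are invented and do not come from any existing VIVO instance.

```python
import json

# Hypothetical researcher record expressed with Schema.org terms.
researcher = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://vivo.example.edu/individual/n1234",  # hypothetical profile URI
    "name": "Jane Example",
    "jobTitle": "Associate Professor",
    "affiliation": {
        "@type": "EducationalOrganization",
        "name": "Example University",
    },
    "sameAs": ["https://orcid.org/0000-0000-0000-0000"],  # hypothetical ORCID iD
    "knowsAbout": ["research networking", "bibliometrics"],
}

# Serialize to JSON-LD for consumption by a front-end framework or an API.
print(json.dumps(researcher, indent=2))
```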
We would like to provide an update on our project, ReCiter. We would also like to hear from others about which features or functionality would be of interest to them.
Paul Albert, Weill Cornell Medicine
Sarbajit Dutta, Weill Cornell Medicine
Under the umbrella of the “Product Evolution Task Force,” a group of implementation sites has committed to creating a new and appealing user interface for VIVO. A number of members of this task force have championed a deliberative approach, one that is driven by use cases and the real-world needs of institutions. The question of what the VIVO user interface should look like and how it should function is being tackled by the Functional Requirements subgroup. Our process was as follows: 1) identify a “Hierarchy of Needs” as well as a canonical set of widely invoked use cases; 2) identify a set of usability heuristics for assessing user interfaces; 3) collect existing approaches; 4) use the usability heuristics to assess solutions already in production at VIVO, Profiles RNS, and other sites; and 5) provide guidance to the User Interface development subgroup. In this presentation, we will share feedback on existing sites as well as mockups of an updated user interface for VIVO.
Michael Bales, Weill Cornell Medicine
Creating an institutional research profiling system requires herculean acts of data corralling from multiple internal institutional systems. Within an institution, the knowledge you need resides in HR, Finance, Grants, Publication Management, and Student Systems, to name a few. Getting access to this information requires multiple negotiations with many institutional stakeholders, and extreme patience on your part to explain why information collected for one purpose can also be used for another. When you do get access to the information you need, in many cases it is not in the format you would ideally like: project titles can be in ALL CAPS, HR position titles can be truncated, and grant dollar amounts may not reflect the external amount of the award. How do we turn this situation around? How do we increase awareness among the data stewards of HR, Finance, and Student Systems that they are research information stewards too? Within an institutional context, how can we define what it means to be a research information citizen? Building on a similar exercise held at Pidapalooza earlier this year, participants at this workshop will aim to identify the shared understanding and norms that should govern how research information is handled and communicated within an institution. In doing so, we hope to take the first step towards baking research profiling into the reasons that institutions collect information in the first place.
UCSF continues to expand Profiles functionality with custom-made features requested by university stakeholders. Our latest ORNG-enabled extensions include Student Projects, Clinical Trials, and Mentee Career Paths. The Clinical Trials extension is of particular importance, as it is one of the primary features behind the UC-wide Profiles that will include all UC schools with biomedical campuses (UCSF, UCSD, UCLA, UC Irvine, and UC Davis). With this extension, researchers will be able to find collaborators with relevant experience to participate in cross-institutional trials. The participating UCs have received an administrative supplemental grant to build this functionality, of which ORNG has been a key enabler.
The five University of California CTSAs received a grant to implement UC Health-wide Profiles and Clinical Trials sites. UCSF is building this system and plans to be in production within a few weeks. The system is a single platform housing all the researchers in a common data store, while supporting institution-level branding, not only for look and feel but for the URL as well. We feel this is important because researchers at, for example, UCSD will be more compelled to trust and support a site with a ucsd.edu domain than one owned by UCSF. Numerous challenges have been encountered in producing this system, including but not limited to: 1) getting buy-in and support from the key individuals as well as the 'worker bees' at the various institutions; 2) finding a business model that will allow us to continue to run this system after the grant completes; 3) finding compelling use cases that justify having a large single system versus independent sites; and 4) many technical and operational challenges due to scale, branding, and overall complexity.
Brian Turner, UCSF - CTSI
Eric Meeks, UCSF - CTSI
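The UC Health-wide system described above keeps all researcher profiles in a common data store while presenting each campus with its own branding and URL. The snippet below is a minimal sketch, under assumed hostnames and theme names, of how host-based branding resolution could work in such a multi-campus deployment; it is illustrative only and does not represent the actual UCSF implementation.

```python
# Hypothetical mapping from requested hostname to institution-level branding.
BRANDING = {
    "profiles.ucsf.edu": {"institution": "UCSF", "theme": "ucsf-theme"},
    "profiles.ucsd.edu": {"institution": "UCSD", "theme": "ucsd-theme"},
    "profiles.ucdavis.edu": {"institution": "UC Davis", "theme": "ucdavis-theme"},
}

# Fallback branding when a host is not recognized.
DEFAULT_BRANDING = {"institution": "UC Health", "theme": "uc-default-theme"}


def resolve_branding(host: str) -> dict:
    """Return the branding configuration for the requesting domain."""
    return BRANDING.get(host.lower(), DEFAULT_BRANDING)


if __name__ == "__main__":
    # A request arriving at a ucsd.edu URL is rendered with UCSD branding,
    # even though the underlying profile data lives in the shared store.
    print(resolve_branding("profiles.ucsd.edu"))
```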